Feature - E-L solid particle models#1297
Closed
JA0307 wants to merge 2598 commits into MFlowCode:master from
Conversation
Co-authored-by: Daniel Vickers <danieljvickers@login11.frontier.olcf.ornl.gov>
Co-authored-by: Daniel Vickers <danieljvickers@login08.frontier.olcf.ornl.gov>
This PR expands the SG EoS to its general form. This change is needed for the PC model to work with both the 5- and 6-equation models.
Incorporates a suggestion from coderabbitai: initialize qv_avg to the input parameter in 's_compute_speed_of_sound'.
Added configuration options to reduce walkthrough noise and remove 'Finishing Touches' section content.
…and MPI communication

Features:
- Lagrangian bubble movement with projection-based void fraction smearing
- Kahan summation for accurate void fraction boundary conditions
- Extended MPI communication for moving EL bubbles
- New 2D and 3D moving Lagrangian bubble examples
- Updated test cases and golden files
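The Kahan summation mentioned above can be sketched as follows. This is a generic compensated-summation helper for illustration, not the solver's actual routine; all names are hypothetical:

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: carries a running round-off term
    so that small contributions -- e.g. Gaussian-smeared void-fraction
    increments near a boundary -- are not lost when added to a much
    larger accumulator."""
    total = 0.0
    compensation = 0.0  # accumulated round-off error
    for v in values:
        y = v - compensation             # re-inject previously lost bits
        t = total + y                    # low-order bits of y may be lost here
        compensation = (t - total) - y   # recover exactly what was lost
        total = t
    return total

# Naive summation loses the 1e-16 terms entirely (each is below half an
# ulp of 1.0); Kahan summation accumulates them.
vals = [1.0] + [1e-16] * 10**6
```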
This commit adds normal collision forces. The point is to checkpoint the code before merging with upstream. Collisions are not yet ready for use; the beta buffer filling needs to be fixed first.
…e/moving-solid-particles
Buffer communication of beta is now consistent with how the bubble solver fills beta. There is an initial implementation of collisions, but it is not yet ready for use. Various files that had diverged from master were updated to match.
…feature/moving-solid-particles
Total Gaussian weight computation is now done in the particle dynamics loop. Removed WENO reconstruction of Gaussian-smeared particle quantities for gradient fields and switched to Fornberg finite differences instead; WENO is overkill and a time sink here. Re-added Fornberg finite-difference weight computation.
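For reference, Fornberg's algorithm generates finite-difference weights on arbitrarily spaced nodes. The sketch below is a Python transcription of the published algorithm (Fornberg, 1988), shown for illustration; it is not MFC's Fortran implementation:

```python
def fornberg_weights(z, x, m):
    """Fornberg's algorithm: weights w for the m-th derivative at point z
    given nodes x[0..n-1], so that f^(m)(z) ~= sum_j w[j] * f(x[j])."""
    n = len(x)
    c = [[0.0] * (m + 1) for _ in range(n)]
    c[0][0] = 1.0
    c1, c4 = 1.0, x[0] - z
    for i in range(1, n):
        mn = min(i, m)
        c2, c5, c4 = 1.0, c4, x[i] - z
        for j in range(i):
            c3 = x[i] - x[j]
            c2 *= c3
            if j == i - 1:
                # Weights for the newly added node x[i]
                for k in range(mn, 0, -1):
                    c[i][k] = c1 * (k * c[i - 1][k - 1] - c5 * c[i - 1][k]) / c2
                c[i][0] = -c1 * c5 * c[i - 1][0] / c2
            # Update weights of previously included nodes
            for k in range(mn, 0, -1):
                c[j][k] = (c4 * c[j][k] - k * c[j][k - 1]) / c3
            c[j][0] = c4 * c[j][0] / c3
        c1 = c2
    return [c[j][m] for j in range(n)]
```

On the uniform nodes [-1, 0, 1] with m=1 this recovers the classic central-difference weights [-1/2, 0, 1/2].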
Particle Solver:
1. Call rhs functions
1.1 Particle Dynamics subroutine
GPU loop across all particles:
- Compute fluid force on the particle
- If two-way coupled, spread the particle force and volume fraction across surrounding cells with a Gaussian kernel
- If collisions are off, update particle velocities and acceleration
- Finalize volume fraction field by filling buffers
- If collisions are on, call collision subroutine to compute collision forces with nearby particles and with any solid boundaries. Then update particle velocities/acceleration
- Collision subroutine:
- Ghost particles temporarily added to arrays
- GPU loop across all particles:
- Check for overlap with neighbor cells, including ghost particles
- Collision force for particle pair only computed once
- Each particle gets checked for wall collision
- Communicate forces applied to ghost particles
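The collision pass above can be sketched as follows. This is an illustrative Python sketch using a simple linear-spring normal force; the function, the stiffness `k_n`, and the plane-based wall representation are assumptions, not MFC's implementation. It shows the two properties described in the outline: each particle pair is evaluated exactly once, and every particle is checked against the walls.

```python
import math

def collision_forces(pos, radius, k_n, walls):
    """Normal collision forces for spheres.
    pos: list of (x, y, z) centers; radius: per-particle radii;
    k_n: linear spring stiffness (illustrative model);
    walls: list of (normal, offset) planes, fluid side where n.x >= offset."""
    n = len(pos)
    f = [[0.0, 0.0, 0.0] for _ in range(n)]
    # Particle-particle: j > i, so each pair is computed exactly once;
    # Newton's third law supplies the reaction on particle j.
    for i in range(n):
        for j in range(i + 1, n):
            d = [pos[i][c] - pos[j][c] for c in range(3)]
            dist = math.sqrt(sum(x * x for x in d))
            overlap = radius[i] + radius[j] - dist
            if overlap > 0.0 and dist > 0.0:
                nhat = [x / dist for x in d]
                for c in range(3):
                    f[i][c] += k_n * overlap * nhat[c]
                    f[j][c] -= k_n * overlap * nhat[c]
    # Particle-wall: every particle is checked against every wall.
    for i in range(n):
        for normal, offset in walls:
            gap = sum(pos[i][c] * normal[c] for c in range(3)) - offset
            overlap = radius[i] - gap
            if overlap > 0.0:
                for c in range(3):
                    f[i][c] += k_n * overlap * normal[c]
    return f
```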
1.2 Particles Source Subroutine
GPU loop across all domain cells:
- Update fluid solver rhs with particle source terms for mass, momentum, energy
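The per-cell source terms in step 1.2 might look like the following sketch. The form is assumed from standard two-way-coupling practice (the fluid feels the reaction to the smeared particle force, and the energy equation receives the work done by that force); it is not taken from MFC's code, and all names are illustrative:

```python
def particle_source_terms(force_field, velocity_field, cell_volume):
    """Per-cell momentum and energy sources from an already-smeared
    particle force field (1D scalars for brevity)."""
    n = len(force_field)
    src_mom = [0.0] * n
    src_energy = [0.0] * n
    for i in range(n):
        src_mom[i] = -force_field[i] / cell_volume      # reaction force density
        src_energy[i] = src_mom[i] * velocity_field[i]  # work by reaction force
    return src_mom, src_energy
```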
2. Call Particle RK update
- Update particle positions/velocity
- Transfer particles across ranks
- Enforce boundary conditions on the particles. Some overlap is allowed with solid walls to allow the collision force to handle moving the particle away from the wall naturally.
- Update particle volume fraction field (zero out the field variables, compute the Gaussian contribution for normalization, apply the Gaussian kernel, communicate the beta field).
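The void-fraction update in step 2 can be sketched in 1D. All names are illustrative and the MPI halo exchange is omitted; the per-particle normalization mirrors the "total Gaussian weight" computation described earlier:

```python
import math

def update_beta_field(xs, dx, particles, sigma):
    """Zero the field, then deposit each particle's volume onto the grid
    with a Gaussian kernel, normalized so that each particle deposits
    exactly its own volume (sum(beta) * dx == total particle volume)."""
    beta = [0.0] * len(xs)
    for xp, vol in particles:  # (position, particle volume)
        # Total Gaussian weight over the cells this particle touches,
        # used to normalize the deposit.
        w = [math.exp(-0.5 * ((x - xp) / sigma) ** 2) for x in xs]
        wsum = sum(w)
        if wsum > 0.0:
            for i in range(len(xs)):
                # On the GPU this is an atomic update; a plain add here.
                beta[i] += vol * w[i] / (wsum * dx)
    return beta
```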
Optimization:
- Particle volume fraction and source term contributions at each cell are updated with an atomic update inside the Gaussian kernel function; using particle-cell linked lists instead makes this much slower.
- The particle dynamics loop handles the fluid force and the Gaussian projection kernel all in one pass. This reduces the number of GPU kernels launched and reuses particle data already in memory.
Contributor
The git history on this branch is trashed. @JA0307 you'll need to close this PR and run the following commands to extract just the relevant changes you've made and put them on a new branch, and then open a PR with that branch. I trashed the git history on the fork that you based this code on, which is why the commit count and line change count are so absurdly high. In the future, I would recommend not forking forks of MFC and instead working from the master branch. I know this was a bit of a special circumstance with the EL MPI code not being merged yet.
Author
Makes sense, thank you!
Solid Particle Implementation for E-L Solver
This update expands the pre-existing (in-development) E-L solver for bubble dynamics to include solid particle dynamics. This is in support of the PSAAP center, which requires the capability to model solid particle dynamics in MFC.
Type of change: New feature, Refactor
Testing
The solver has been tested by running various 2D/3D problems involving fluid-particle interactions, such as spherical blasts surrounded by a layer of particles, shock-particle curtains, collision tests, etc.
The inputs to the EL solid particle solver have all been toggled on/off to verify they work independently of each other, and together.
The code has been tested for CPU and GPU usage. The GPU usage has been tested on Tuolumne.
Two new files have been added:
File 1 has the main particle dynamics subroutines. It initializes the particles, computes fluid forces and coupling terms, computes collision forces, enforces boundary conditions, and writes the data for post-processing.
File 2 has the Gaussian kernel projection code and the subroutine that computes the force on the particle due to the fluid. It computes the quasi-steady drag force, pressure-gradient force, added-mass force, Stokes drag, and gravitational force. Models for the quasi-steady drag are implemented here.
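The force decomposition listed for File 2 can be sketched for a single spherical particle. These are standard textbook forms in 1D scalar shorthand, not MFC's implementation; the Schiller-Naumann correction is shown as one common quasi-steady drag model, and the added-mass term is simplified (particle acceleration neglected):

```python
import math

def fluid_force_on_particle(rho_f, rho_p, d_p, u_f, u_p, grad_p, du_f_dt, mu, g):
    """Sum of drag, pressure-gradient, added-mass, and gravity forces on
    one sphere of diameter d_p (all quantities scalar, 1D)."""
    vol = math.pi * d_p ** 3 / 6.0
    mass = rho_p * vol
    rel = u_f - u_p                                     # slip velocity
    re_p = rho_f * abs(rel) * d_p / mu                  # particle Reynolds number
    # Schiller-Naumann finite-Re correction to Stokes drag (one common model)
    cd_corr = 1.0 + 0.15 * re_p ** 0.687 if re_p > 0.0 else 1.0
    f_drag = 3.0 * math.pi * mu * d_p * rel * cd_corr   # quasi-steady drag
    f_pres = -vol * grad_p                              # pressure-gradient force
    f_am = 0.5 * rho_f * vol * du_f_dt                  # added mass (simplified)
    f_grav = (mass - rho_f * vol) * g                   # gravity + buoyancy
    return f_drag + f_pres + f_am + f_grav
```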
Checklist
Details:
In progress
Details:
GPU results do not yet match CPU results. Still determining which part of the model is responsible.
AI code reviews
@coderabbitai full review